Benchmarking Distributed Stream Data Processing Systems
The need for scalable and efficient stream analysis has led to the
development of many open-source streaming data processing systems (SDPSs) with
highly diverging capabilities and performance characteristics. While initial
efforts compare these systems on simple workloads, detailed analyses of their
performance characteristics are still lacking. In this
paper, we propose a framework for benchmarking distributed stream processing
engines. We use our suite to evaluate the performance of three widely used
SDPSs in detail, namely Apache Storm, Apache Spark, and Apache Flink. Our
evaluation focuses in particular on measuring the throughput and latency of
windowed operations, which are the basic type of operations in stream
analytics. For this benchmark, we design workloads based on real-life,
industrial use-cases inspired by the online gaming industry. The contribution
of our work is threefold. First, we give a definition of latency and throughput
for stateful operators. Second, we carefully separate the system under test
from the driver, correctly representing the open-world model of typical stream
processing deployments and therefore measuring system performance under
realistic conditions. Third, we build the first benchmarking framework to
define and test the sustainable performance of streaming systems.
Our detailed evaluation highlights the individual characteristics and
use-cases of each system.
Comment: Published at ICDE 201
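To make the latency notion concrete, here is a minimal, hypothetical Python sketch of one plausible definition of latency for a windowed (stateful) operator: the gap between a tumbling window's emission point and the newest event it contains. The function name and the use of the window end as the emission time are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def tumbling_window_latencies(events, window_size):
    """Group (event_time, value) pairs into tumbling windows and report,
    per window, the aggregate plus a latency figure: emission time
    (approximated here by the window's end) minus the newest event time
    in the window -- one plausible definition for stateful operators."""
    windows = defaultdict(list)
    for event_time, value in events:
        windows[event_time // window_size].append((event_time, value))
    results = {}
    for key, items in sorted(windows.items()):
        window_end = (key + 1) * window_size
        newest = max(t for t, _ in items)
        total = sum(v for _, v in items)
        results[window_end] = (total, window_end - newest)
    return results

# Example: 10-unit tumbling windows over a small event stream.
# The window ending at 10 sums to 10 with latency 10 - 9 = 1;
# the window ending at 20 sums to 7 with latency 20 - 12 = 8.
events = [(1, 5), (4, 3), (9, 2), (12, 7)]
summary = tumbling_window_latencies(events, 10)
```

A real benchmark driver would instead timestamp events at the source and at emission on synchronized clocks; this sketch only illustrates the bookkeeping.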
Validating psychometric survey responses
We present an approach to classifying the validity of survey responses using
machine learning techniques. The approach is based on collecting users' mouse
activity on web surveys and quickly predicting the validity of a response as a
whole, without analyzing specific answers. A rule-based approach as well as
LSTM and HMM models are considered. The approach can be used in web-survey
applications to detect suspicious user behaviour and to request proper answers
instead of recording false data.
Comment: 14 pages, 4 figure
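As an illustration of the rule-based direction, the following hypothetical sketch flags a response from mouse activity alone, without reading the answers. The feature set and the thresholds (`min_total_distance`, `min_duration`) are invented for the example and are not taken from the paper.

```python
def is_suspicious(mouse_events, min_total_distance=200.0, min_duration=5.0):
    """Flag a survey response as suspicious from raw mouse activity alone.

    `mouse_events` is a chronological list of (timestamp, x, y) tuples.
    Thresholds are illustrative assumptions, not values from the study.
    """
    if len(mouse_events) < 2:
        return True  # virtually no interaction recorded
    total = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(mouse_events, mouse_events[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    duration = mouse_events[-1][0] - mouse_events[0][0]
    # Too little cursor movement, or an implausibly fast completion,
    # suggests clicking through without reading the questions.
    return total < min_total_distance or duration < min_duration
```

The LSTM and HMM variants mentioned in the abstract would instead consume the raw event sequence directly rather than hand-crafted aggregates.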
Limitations of Neural-Network-Based Named Entity Recognition (NER) for Extracting Data from Curricula Vitae
We investigate the abilities of neural-network-based NER models, their quality, and their limitations in resolving entities of varying complexity (emails, names, skills, etc.). We show that quality depends on the entity type and its complexity, and we estimate the "ceilings" that model quality can reach given a proper implementation and a well-labelled dataset.
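Per-entity-type quality measurements of the kind described above are typically reported as exact-match F1 over typed spans. This sketch (the function name and the span format are assumptions for illustration) shows the idea:

```python
def per_type_f1(gold, predicted):
    """Compute exact-match F1 per entity type.

    `gold` and `predicted` are sets of (entity_type, start, end) spans;
    a prediction counts only if type and boundaries match exactly.
    """
    types = {t for t, _, _ in gold | predicted}
    scores = {}
    for etype in sorted(types):
        g = {s for s in gold if s[0] == etype}
        p = {s for s in predicted if s[0] == etype}
        tp = len(g & p)
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[etype] = f1
    return scores
```

Simple entity types such as emails tend to score near their ceiling, while fuzzier ones such as skills do not, which is consistent with the per-type dependence the abstract reports.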